import pandas as pd
import numpy as np
from pprint import pprint
kakamana
April 9, 2023
The purpose of this chapter is to introduce you to a popular automated hyperparameter tuning method referred to as Grid Search. In this lesson, you will gain an understanding of what it is, how it works, and how to conduct a Grid Search using Scikit Learn. Afterwards, you will learn how to analyze the output of a Grid Search and gain practical experience in doing so.
This Grid Search lesson is part of the Datacamp course Hyperparameter Tuning in Python. Hyperparameters play a significant role in the development of powerful machine learning models. However, with increasingly complex models and numerous options available, how can you efficiently identify the best settings for your particular problem? You will gain practical experience using some common methodologies for automated hyperparameter tuning in Python with Scikit Learn. Among these are Grid Search, Random Search, and advanced optimization methodologies such as Bayesian and Genetic algorithms. To dramatically increase the efficiency and effectiveness of your machine learning model creation, you will use a dataset predicting credit card defaults.
This is my learning experience of data science through DataCamp. These repository contributions are part of my learning journey through my graduate program, Master of Applied Data Science (MADS) at the University of Michigan, DeepLearning.AI, Coursera & DataCamp. You can find my similar articles & more stories on my Medium & LinkedIn profiles. I am also available on Kaggle & GitHub. Thank you for your motivation, support & valuable feedback.
These include projects, coursework & notebooks that I created through my data science journey. They are intended for reproducibility & future reference only. All source code, slides or screenshots are the intellectual property of their respective content authors. If you find these contents beneficial, kindly consider a learning subscription from DeepLearning.AI, Coursera, or DataCamp.
In data science it is a great idea to try building algorithms, models and processes ‘from scratch’ so you can really understand what is happening at a deeper level. Of course there are great packages and libraries for this work (and we will get to that very soon!) but building from scratch will give you a great edge in your data science work.
In this exercise, you will create a function to take in 2 hyperparameters, build models and return results. You will use this function in a future exercise.
from sklearn.model_selection import train_test_split
credit_card = pd.read_csv('dataset/credit-card-full.csv')
# Replace the categorical variables with dummy variables
credit_card = pd.get_dummies(credit_card, columns=['SEX', 'EDUCATION', 'MARRIAGE'], drop_first=True)
X = credit_card.drop(['ID', 'default payment next month'], axis=1)
y = credit_card['default payment next month']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True)
credit_card.head()
| ID | LIMIT_BAL | AGE | PAY_0 | PAY_2 | PAY_3 | PAY_4 | PAY_5 | PAY_6 | BILL_AMT1 | ... | SEX_2 | EDUCATION_1 | EDUCATION_2 | EDUCATION_3 | EDUCATION_4 | EDUCATION_5 | EDUCATION_6 | MARRIAGE_1 | MARRIAGE_2 | MARRIAGE_3
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 1 | 20000 | 24 | 2 | 2 | -1 | -1 | -2 | -2 | 3913 | ... | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
1 | 2 | 120000 | 26 | -1 | 2 | 0 | 0 | 0 | 2 | 2682 | ... | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2 | 3 | 90000 | 34 | 0 | 0 | 0 | 0 | 0 | 0 | 29239 | ... | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3 | 4 | 50000 | 37 | 0 | 0 | 0 | 0 | 0 | 0 | 46990 | ... | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
4 | 5 | 50000 | 57 | -1 | 0 | -1 | 0 | 0 | 0 | 8617 | ... | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
5 rows × 32 columns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
# Create the function
def gbm_grid_search(learn_rate, max_depth):
    # Create the model
    model = GradientBoostingClassifier(learning_rate=learn_rate, max_depth=max_depth)
    # Use the model to make predictions
    predictions = model.fit(X_train, y_train).predict(X_test)
    # Return the hyperparameters and score
    return [learn_rate, max_depth, accuracy_score(y_test, predictions)]
In this exercise, you will build on the function you previously created: first you will loop it over some values of the two hyperparameters (as in the sketch below), and then you will extend both the function and the loop with a third hyperparameter.
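A minimal loop to drive the function, with the hyperparameter value lists inferred from the results printed below (learning rates of 0.01, 0.1, 0.5 and depths of 2, 4, 6):
# Hyperparameter value lists (inferred from the results below)
learn_rate_list = [0.01, 0.1, 0.5]
max_depth_list = [2, 4, 6]
# Create the list to store results
results_list = []
# Loop through the two hyperparameters and store each result
for learn_rate in learn_rate_list:
    for max_depth in max_depth_list:
        results_list.append(gbm_grid_search(learn_rate, max_depth))
# Print the results
pprint(results_list)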
[[0.01, 2, 0.8245555555555556],
[0.01, 4, 0.8201111111111111],
[0.01, 6, 0.8184444444444444],
[0.1, 2, 0.8254444444444444],
[0.1, 4, 0.8235555555555556],
[0.1, 6, 0.8197777777777778],
[0.5, 2, 0.8208888888888889],
[0.5, 4, 0.8041111111111111],
[0.5, 6, 0.79]]
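Even in this manual grid, the best-performing combination can be pulled out directly; for example, since accuracy is the last element of each result row:
# Find the combination with the highest test accuracy
best_result = max(results_list, key=lambda row: row[-1])
print(best_result)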
def gbm_grid_search_extended(learn_rate, max_depth, subsample):
    # Extend the model creation section
    model = GradientBoostingClassifier(learning_rate=learn_rate, max_depth=max_depth,
                                       subsample=subsample)
    predictions = model.fit(X_train, y_train).predict(X_test)
    # Extend the return part
    return [learn_rate, max_depth, subsample, accuracy_score(y_test, predictions)]

subsample_list = [0.4, 0.6]
for learn_rate in learn_rate_list:
    for max_depth in max_depth_list:
        # Extend the for loop
        for subsample in subsample_list:
            # Extend the results to include the new hyperparameter
            results_list.append(gbm_grid_search_extended(learn_rate, max_depth, subsample))
# Print the results
pprint(results_list)
[[0.01, 2, 0.8245555555555556],
[0.01, 4, 0.8201111111111111],
[0.01, 6, 0.8184444444444444],
[0.1, 2, 0.8254444444444444],
[0.1, 4, 0.8235555555555556],
[0.1, 6, 0.8197777777777778],
[0.5, 2, 0.8208888888888889],
[0.5, 4, 0.8041111111111111],
[0.5, 6, 0.79],
[0.01, 2, 0.4, 0.8244444444444444],
[0.01, 2, 0.6, 0.8245555555555556],
[0.01, 4, 0.4, 0.8213333333333334],
[0.01, 4, 0.6, 0.8212222222222222],
[0.01, 6, 0.4, 0.8182222222222222],
[0.01, 6, 0.6, 0.8193333333333334],
[0.1, 2, 0.4, 0.8246666666666667],
[0.1, 2, 0.6, 0.8247777777777778],
[0.1, 4, 0.4, 0.8232222222222222],
[0.1, 4, 0.6, 0.8232222222222222],
[0.1, 6, 0.4, 0.8168888888888889],
[0.1, 6, 0.6, 0.8202222222222222],
[0.5, 2, 0.4, 0.8164444444444444],
[0.5, 2, 0.6, 0.8182222222222222],
[0.5, 4, 0.4, 0.806],
[0.5, 4, 0.6, 0.802],
[0.5, 6, 0.4, 0.7744444444444445],
[0.5, 6, 0.6, 0.779]]
The GridSearchCV class from Scikit Learn provides many useful features to assist with efficiently undertaking a grid search. You will now put your learning into practice by creating a GridSearchCV object with certain parameters.
The desired options are:
- A Random Forest estimator, with the split criterion set to 'entropy'
- A hyperparameter grid covering max_depth values of 2, 4, 8 and 15, and max_features values of 'auto' and 'sqrt'
- ROC-AUC ('roc_auc') as the scoring metric
- 4 parallel cores (n_jobs=4) and 5-fold cross-validation (cv=5)
- Refitting the best model on the full training set (refit=True) and keeping the training scores (return_train_score=True)
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
# Create a Random Forest Classifier with specified criterion
rf_class = RandomForestClassifier(criterion='entropy')
# Create the parameter grid
param_grid = {
'max_depth':[2, 4, 8, 15],
'max_features':['auto', 'sqrt']
}
# Create a GridSearchCV object
grid_rf_class = GridSearchCV(
estimator=rf_class,
param_grid=param_grid,
scoring='roc_auc',
n_jobs=4,
cv=5,
refit=True,
return_train_score=True
)
print(grid_rf_class)
GridSearchCV(cv=5, estimator=RandomForestClassifier(criterion='entropy'),
n_jobs=4,
param_grid={'max_depth': [2, 4, 8, 15],
'max_features': ['auto', 'sqrt']},
return_train_score=True, scoring='roc_auc')
You will now explore the cv_results_ property of the GridSearchCV object defined above. This is a dictionary that we can read into a pandas DataFrame, and it contains a lot of useful information about the grid search we just undertook.
A reminder of the different column types in this property:
- timing columns: mean_fit_time, std_fit_time, mean_score_time, std_score_time
- parameter columns: the param_ prefixed columns and the params dictionary column
- test score columns: split{n}_test_score for each fold, plus mean_test_score, std_test_score and rank_test_score
- train score columns (present because return_train_score=True): split{n}_train_score for each fold, plus mean_train_score and std_train_score
grid_rf_class.fit(X_train, y_train)
# Read the cv_results_ property into a DataFrame & print it out
cv_results_df = pd.DataFrame(grid_rf_class.cv_results_)
print(cv_results_df)
# Extract and print the column with a dictionary of hyperparameters used
column = cv_results_df.loc[:, ["params"]]
print(column)
# Extract and print the row that had the best mean test score
best_row = cv_results_df[cv_results_df['rank_test_score'] == 1]
print(best_row)
/Users/kakamana/opt/anaconda3/lib/python3.9/site-packages/sklearn/ensemble/_forest.py:424: FutureWarning: `max_features='auto'` has been deprecated in 1.1 and will be removed in 1.3. To keep the past behaviour, explicitly set `max_features='sqrt'` or remove this parameter as it is also the default value for RandomForestClassifiers and ExtraTreesClassifiers.
warn(
mean_fit_time std_fit_time mean_score_time std_score_time \
0 0.553439 0.009132 0.013463 0.000448
1 0.531026 0.009878 0.013239 0.000555
2 0.914235 0.011106 0.017581 0.000717
3 0.945417 0.020631 0.017516 0.000926
4 1.644313 0.017119 0.026537 0.000673
5 1.627506 0.009529 0.026094 0.000421
6 2.613335 0.021025 0.041663 0.000498
7 2.599870 0.028571 0.041664 0.000323
param_max_depth param_max_features \
0 2 auto
1 2 sqrt
2 4 auto
3 4 sqrt
4 8 auto
5 8 sqrt
6 15 auto
7 15 sqrt
params split0_test_score \
0 {'max_depth': 2, 'max_features': 'auto'} 0.766386
1 {'max_depth': 2, 'max_features': 'sqrt'} 0.763033
2 {'max_depth': 4, 'max_features': 'auto'} 0.770331
3 {'max_depth': 4, 'max_features': 'sqrt'} 0.769073
4 {'max_depth': 8, 'max_features': 'auto'} 0.774115
5 {'max_depth': 8, 'max_features': 'sqrt'} 0.772641
6 {'max_depth': 15, 'max_features': 'auto'} 0.766062
7 {'max_depth': 15, 'max_features': 'sqrt'} 0.769417
split1_test_score split2_test_score ... mean_test_score std_test_score \
0 0.762023 0.763215 ... 0.766701 0.003842
1 0.762789 0.761229 ... 0.765009 0.003320
2 0.766002 0.766667 ... 0.771341 0.004908
3 0.766696 0.768804 ... 0.771745 0.004521
4 0.770093 0.777018 ... 0.777718 0.005363
5 0.769696 0.774317 ... 0.776565 0.005625
6 0.765904 0.773864 ... 0.774454 0.007813
7 0.767425 0.775033 ... 0.775002 0.005939
rank_test_score split0_train_score split1_train_score \
0 7 0.768926 0.770673
1 8 0.770699 0.768162
2 6 0.779433 0.779525
3 5 0.777827 0.780215
4 1 0.829605 0.830069
5 2 0.828698 0.827135
6 4 0.974661 0.973459
7 3 0.975158 0.971872
split2_train_score split3_train_score split4_train_score \
0 0.770016 0.768063 0.769712
1 0.768304 0.767662 0.766671
2 0.777968 0.777477 0.777491
3 0.779218 0.777710 0.777653
4 0.828715 0.826782 0.825327
5 0.827792 0.825685 0.827783
6 0.972964 0.973317 0.975256
7 0.974634 0.972400 0.974306
mean_train_score std_train_score
0 0.769478 0.000903
1 0.768300 0.001329
2 0.778379 0.000916
3 0.778525 0.001025
4 0.828100 0.001786
5 0.827418 0.001000
6 0.973932 0.000874
7 0.973674 0.001296
[8 rows x 22 columns]
params
0 {'max_depth': 2, 'max_features': 'auto'}
1 {'max_depth': 2, 'max_features': 'sqrt'}
2 {'max_depth': 4, 'max_features': 'auto'}
3 {'max_depth': 4, 'max_features': 'sqrt'}
4 {'max_depth': 8, 'max_features': 'auto'}
5 {'max_depth': 8, 'max_features': 'sqrt'}
6 {'max_depth': 15, 'max_features': 'auto'}
7 {'max_depth': 15, 'max_features': 'sqrt'}
mean_fit_time std_fit_time mean_score_time std_score_time \
4 1.644313 0.017119 0.026537 0.000673
param_max_depth param_max_features \
4 8 auto
params split0_test_score \
4 {'max_depth': 8, 'max_features': 'auto'} 0.774115
split1_test_score split2_test_score ... mean_test_score std_test_score \
4 0.770093 0.777018 ... 0.777718 0.005363
rank_test_score split0_train_score split1_train_score \
4 1 0.829605 0.830069
split2_train_score split3_train_score split4_train_score \
4 0.828715 0.826782 0.825327
mean_train_score std_train_score
4 0.8281 0.001786
[1 rows x 22 columns]
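Because the column names in cv_results_ follow consistent patterns, you can also slice out whole families of columns at once. A small optional example, using the cv_results_df built above:
# Select column families by name pattern
time_cols = [c for c in cv_results_df.columns if c.endswith('time')]
test_score_cols = [c for c in cv_results_df.columns if 'test_score' in c]
print(cv_results_df[test_score_cols].head())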
At the end of the day, we primarily care about the best-performing 'square' in a grid search. Luckily, Scikit Learn's GridSearchCV objects have a number of properties that provide key information about just the best square (or row in cv_results_).
Three properties you will explore are:
- best_score_: the mean cross-validated score of the best-performing square
- best_index_: the row index of that square in cv_results_
- best_params_: the dictionary of hyperparameters that achieved it
best_score = grid_rf_class.best_score_
print(best_score)
# Create a variable from the row related to the best-performing square
cv_results_df = pd.DataFrame(grid_rf_class.cv_results_)
best_row = cv_results_df.loc[[grid_rf_class.best_index_]]
print(best_row)
# Get the max_depth parameter from the best-performing square and print
best_max_depth = grid_rf_class.best_params_['max_depth']
print(best_max_depth)
0.777717676012218
mean_fit_time std_fit_time mean_score_time std_score_time \
4 1.644313 0.017119 0.026537 0.000673
param_max_depth param_max_features \
4 8 auto
params split0_test_score \
4 {'max_depth': 8, 'max_features': 'auto'} 0.774115
split1_test_score split2_test_score ... mean_test_score std_test_score \
4 0.770093 0.777018 ... 0.777718 0.005363
rank_test_score split0_train_score split1_train_score \
4 1 0.829605 0.830069
split2_train_score split3_train_score split4_train_score \
4 0.828715 0.826782 0.825327
mean_train_score std_train_score
4 0.8281 0.001786
[1 rows x 22 columns]
8
While it is interesting to analyze the results of our grid search, our final goal is practical in nature; we want to make predictions on our test set using our estimator object.
We can access this object through the best_estimator_ property of our grid search object.
In this exercise we will take a look inside the best_estimator_ property and then use this to make predictions on our test set for credit card defaults and generate a variety of scores. Remember to use predict_proba rather than predict since we need probability values rather than class labels for our roc_auc score. We use a slice [:,1] to get probabilities of the positive class.
from sklearn.metrics import confusion_matrix, roc_auc_score
# See what type of object the best_estimator_ property holds
print(type(grid_rf_class.best_estimator_))
# Create an array of predictions directly using the best_estimator_ property
predictions = grid_rf_class.best_estimator_.predict(X_test)
# Take a look to confirm it worked, this should be an array of 1's and 0's
print(predictions[0:5])
# Now create a confusion matrix
print("Confusion Matrix \n", confusion_matrix(y_test, predictions))
# Get the ROC-AUC score
predictions_proba = grid_rf_class.best_estimator_.predict_proba(X_test)[:, 1]
print("ROC-AUC Score \n", roc_auc_score(y_test, predictions_proba))
<class 'sklearn.ensemble._forest.RandomForestClassifier'>
[0 0 0 0 0]
Confusion Matrix
[[6712 331]
[1248 709]]
ROC-AUC Score
0.7819188805230386
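One design note: because the grid search was created with refit=True, the GridSearchCV object itself also exposes predict and predict_proba, which delegate to the refitted best estimator, so the following is equivalent to going through best_estimator_:
# Equivalent shortcut enabled by refit=True
predictions_proba = grid_rf_class.predict_proba(X_test)[:, 1]
print("ROC-AUC Score \n", roc_auc_score(y_test, predictions_proba))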